
Learning to read C++ compiler errors: Ambiguous overloaded operator


A customer was adding a feature to an old C++ code base, and the most convenient way to consume the feature was to use C++/WinRT. However, once they added C++/WinRT to their project, they ran into compiler errors in parts of their code that hadn’t changed in decades. As an added wrinkle, the problem occurred only in 32-bit builds.

std::ostream& operator<<(std::ostream& os, LARGE_INTEGER const& value)
{
    return os << value.QuadPart; // ← error
}

The error complained that the << operator was ambiguous.

contoso.cpp(3141) : error C2593: 'operator <<' is ambiguous
ostream(436): note: could be 'std::basic_ostream<char,std::char_traits<char>> &std::basic_ostream<char,std::char_traits<char>>::operator <<(long double)'
ostream(418): note: or       'std::basic_ostream<char,std::char_traits<char>> &std::basic_ostream<char,std::char_traits<char>>::operator <<(double)'
ostream(400): note: or       'std::basic_ostream<char,std::char_traits<char>> &std::basic_ostream<char,std::char_traits<char>>::operator <<(float)'
ostream(382): note: or       'std::basic_ostream<char,std::char_traits<char>> &std::basic_ostream<char,std::char_traits<char>>::operator <<(unsigned __int64)'
ostream(364): note: or       'std::basic_ostream<char,std::char_traits<char>> &std::basic_ostream<char,std::char_traits<char>>::operator <<(__int64)'
ostream(346): note: or       'std::basic_ostream<char,std::char_traits<char>> &std::basic_ostream<char,std::char_traits<char>>::operator <<(unsigned long)'
ostream(328): note: or       'std::basic_ostream<char,std::char_traits<char>> &std::basic_ostream<char,std::char_traits<char>>::operator <<(long)'
ostream(309): note: or       'std::basic_ostream<char,std::char_traits<char>> &std::basic_ostream<char,std::char_traits<char>>::operator <<(unsigned int)'
ostream(283): note: or       'std::basic_ostream<char,std::char_traits<char>> &std::basic_ostream<char,std::char_traits<char>>::operator <<(int)'
ostream(264): note: or       'std::basic_ostream<char,std::char_traits<char>> &std::basic_ostream<char,std::char_traits<char>>::operator <<(unsigned short)'
ostream(230): note: or       'std::basic_ostream<char,std::char_traits<char>> &std::basic_ostream<char,std::char_traits<char>>::operator <<(short)'
ostream(212): note: or       'std::basic_ostream<char,std::char_traits<char>> &std::basic_ostream<char,std::char_traits<char>>::operator <<(bool)'
contoso.h(1554): note: or       'std::ostream &operator <<(std::ostream &,const unsigned __int64 &)'
contoso.h(1548): note: or       'std::ostream &operator <<(std::ostream &,const __int64 &)'
ostream(953): note: or       'std::basic_ostream<char,std::char_traits<char>> &std::operator <<<std::char_traits<char>>(std::basic_ostream<char,std::char_traits<char>> &,unsigned char)'
ostream(942): note: or       'std::basic_ostream<char,std::char_traits<char>> &std::operator <<<std::char_traits<char>>(std::basic_ostream<char,std::char_traits<char>> &,signed char)'
ostream(819): note: or       'std::basic_ostream<char,std::char_traits<char>> &std::operator <<<std::char_traits<char>>(std::basic_ostream<char,std::char_traits<char>> &,char)'
ostream(738): note: or       'std::basic_ostream<char,std::char_traits<char>> &std::operator <<<char,std::char_traits<char>>(std::basic_ostream<char,std::char_traits<char>> &,char)'
contoso.cpp(1582): note: while trying to match the argument list '(std::ostream, LONGLONG)'

All of these are different overloads of std::ostream& std::ostream::operator<<(something) or std::ostream& operator<<(std::ostream&, something), so let’s remove all the repeated stuff to make it easier to see what we are up against.

ostream(436): long double
ostream(418): double
ostream(400): float
ostream(382): unsigned __int64
ostream(364): __int64
ostream(346): unsigned long
ostream(328): long
ostream(309): unsigned int
ostream(283): int
ostream(264): unsigned short
ostream(230): short
ostream(212): bool
contoso.h(1554): const unsigned __int64 &
contoso.h(1548): const __int64 &
ostream(953): unsigned char
ostream(942): signed char
ostream(819): char
ostream(738): char

From the code, we see that the intention is to use the insertion operator that takes a signed 64-bit integer, so let’s filter down to those.

ostream(364): __int64
contoso.h(1548): const __int64 &

Aha, now we see the conflict. The C++ standard library (<ostream>) has defined an output inserter for __int64, and the customer has defined an output inserter for const __int64&, so the compiler can’t choose between them.

The compiler kindly provided line numbers, so we can look at the conflict introduced by contoso.h.

#if !defined(_WIN64) && !defined(_STL70_) && !defined(_STL110_)

// These are already defined in STL
std::ostream& operator<<(std::ostream&, const __int64& );
std::ostream& operator<<(std::ostream&, const unsigned __int64& );

#endif /* _WIN64 _STL70_ _STL110_ */

Okay, well, the !defined(_WIN64) explains why this problem occurs only in 32-bit builds: The conflicting definition is #if'd out in 64-bit builds.

The rest of the #if expression removes the conflicting definition for STL versions 7.0 and 11.0. So what happened to reactivate this code path?

Adding C++/WinRT to the project.

C++/WinRT requires C++17 or later, which means that the project had to bump its compiler version, and that pushed the STL version to 12.0. And their custom #if doesn’t handle that case.

I went back through the project history, and saw that about five years ago, the line was just

#if !defined(_WIN64) && !defined(_STL70_)

So I’m guessing that at some point in the past five years, they upgraded their compiler version, and they ran into this exact problem, and they realized, “Oh, we need to suppress this for STL 11.0, too,” and they added the !defined(_STL110_).

History repeats itself.

One solution is to put another layer of duct tape on it.

#if !defined(_WIN64) && !defined(_STL70_) && !defined(_STL110_) && !defined(_STL120_)

Of course, this just means that in another five years, when they decide to upgrade to C++30, this problem will come back and somebody will have to add yet another layer of duct tape.

So they could choose something that is a bit more forward-compatible:

#if !defined(_WIN64) && !defined(_STL70_) && !defined(_STL110_) && __cplusplus < 201700

Or they could just delete the entire block. I doubt they are going to roll their compiler back to C++03.

The post Learning to read C++ compiler errors: Ambiguous overloaded operator appeared first on The Old New Thing.


US Military Tested Device That May Be Tied To Havana Syndrome On Rats, Sheep

An anonymous reader quotes a report from CBS News: Tonight, we have details of a classified U.S. intelligence mission that has obtained a previously unknown weapon that may finally unlock a mystery. Since at least 2016, U.S. diplomats, spies and military officers have suffered crippling brain injuries. They've told of being hit by an overwhelming force, damaging their vision, hearing, sense of balance and cognition, but the government has doubted their stories. They've been called delusional. Well now, 60 Minutes has learned that a weapon that can inflict these injuries was obtained overseas and secretly tested on animals on a U.S. military base. We've investigated this mystery for nine years. This is our fourth story, called "Targeting Americans." Despite official government doubt, we never stopped reporting because of the haunting stories we heard [...]. 60 Minutes interviewed Dr. David Relman, a scientific expert and professor from Stanford University who was tasked by the government to lead two investigations into the Havana Syndrome cases. What he and his panel of doctors, physicists, engineers and others found was that "the most plausible explanation for a subset of these cases was a form of radiofrequency or microwave energy," the report says. According to confidential sources cited in the report, undercover Homeland Security agents bought a miniaturized microwave weapon from a Russian criminal network in 2024 and tested it on animals at a U.S. military lab. The injuries reportedly matched those seen in the human cases. "Our confidential sources tell us the still classified weapon has been tested in a U.S. military lab for more than a year," says Dr. Relman. "Tests on rats and sheep show injuries consistent with those seen in humans." He continues: "Also, as a separate part of the investigation, security camera videos have been collected that show Americans being hit. The videos are classified but they were described to us.
In one, a camera in a restaurant in Istanbul captured two FBI agents on vacation sitting at a table with their families. A man with a backpack walks in and suddenly everyone at the table grabs their head as if in pain. Our sources say another video comes from a stairwell in the U.S. embassy in Vienna. The stairs lead to a secure facility. In the video, two people on the stairs suddenly collapse. Those videos and the weapon were among the reasons the Biden administration summoned about half a dozen victims to the White House with about two months left in the president's term." Former intelligence officials and researchers claim elements of the U.S. government downplayed or dismissed the theory for years, possibly to avoid political consequences of accusing a foreign state like Russia of conducting attacks on American personnel.

Read more of this story at Slashdot.


Could Home-Building Robots Help Fix the Housing Crisis?

CNN reports on a company called Automated Architecture (AUAR) which makes "portable" micro-factories that use a robotic arm to produce wooden framing for houses (the walls, floors and roofs): Co-founder Mollie Claypool says the micro-factories will be able to produce the panels quicker, cheaper and more precisely than a timber framing crew, freeing up carpenters to focus on the construction of the building... The micro-factory fits into a shipping container which is sent to the building site along with an operator. Inside the factory, a robotic arm measures, cuts and nails the timber into panels up to 22 feet (6.7 meters) long, keeping gaps for windows and doors, and drilling holes for the wiring and plumbing. The contractor then fits the panels by hand. One micro-factory can produce the panels for a typical house in about a day — a process which, according to Claypool, would take a normal timber framing crew four weeks — and is able to produce framing for buildings up to seven stories tall... She says their service is 30% cheaper than a standard timber framing crew, and up to 15% cheaper than buying panels from large factories and shipping them to a site... She adds that the precision of the micro-factories means that the panels fit together tightly, reducing the heat loss of the final home, making them more energy efficient. AUAR currently has three micro-factories operating in the US and EU, with five more set to be delivered this year... AUAR has raised £7.7 million ($10.3 million) to date, and is expanding into the US, where a lack of housing and preference for using wood makes it a large potential market. There are other companies producing wooden or modular housing components, the article points out. But despite the automation, the company's co-founder insists to CNN that "Automation isn't replacing jobs. Automation is filling the gap."
The UK's Construction Industry Training Board found that the country will need 250,000 more workers by 2028 to meet building targets but in 2023, more people left the industry than joined.



Judges Find AI Doesn't Have Human Intelligence in Two New Court Cases

Within the last month two U.S. judges have effectively declared AI bots are not human, writes Los Angeles Times columnist Michael Hiltzik: On Monday, the Supreme Court declined to take up a lawsuit in which artist and computer scientist Stephen Thaler tried to copyright an artwork that he acknowledged had been created by an AI bot of his own invention. That left in place a ruling last year by the District of Columbia Court of Appeals, which held that art created by non-humans can't be copyrighted... [Judge Patricia A. Millett] cited longstanding regulations of the Copyright Office requiring that "for a work to be copyrightable, it must owe its origin to a human being"... She rejected Thaler's argument, as had the federal trial judge who first heard the case, that the Copyright Office's insistence that the author of a work must be human was unconstitutional. The Supreme Court evidently agreed... [Another AI-related case] involved one Bradley Heppner, who was indicted by a federal grand jury for allegedly looting $150 million from a financial services company he chaired. Heppner pleaded innocent and was released on $25-million bail. The case is pending.... Knowing that an indictment was in the offing, Heppner had consulted Claude for help on a defense strategy. His lawyers asserted that those exchanges, which were set forth in written memos, were tantamount to consultations with Heppner's lawyers; therefore, his lawyers said, they were confidential according to attorney-client privilege and couldn't be used against Heppner in court. (They also cited the related attorney work product doctrine, which grants confidentiality to lawyers' notes and other similar material.) That was a nontrivial point. Heppner had given Claude information he had learned from his lawyers, and shared Claude's responses with his lawyers. [Federal Judge Jed S.] Rakoff made short work of this argument.
First, he ruled, the AI documents weren't communications between Heppner and his attorneys, since Claude isn't an attorney... Second, he wrote, the exchanges between Heppner and Claude weren't confidential. In its terms of use, Anthropic claims the right to collect both a user's queries and Claude's responses, use them to "train" Claude, and disclose them to others. Finally, he ruled, Heppner wasn't asking Claude for legal advice, but for information he could pass on to his own lawyers, or not. Indeed, when prosecutors tested Claude by asking whether it could give legal advice, the bot advised them to "consult with a qualified attorney." The columnist agrees AI-generated results shouldn't receive the same protections as human-generated material. "The AI bots are machines, and portraying them as though they're thinking creatures like artists or attorneys doesn't change that, and shouldn't." He also seems to think their output is at best second-hand regurgitation. "Everything an AI bot spews out is, at more than a fundamental level, the product of human creativity."



How Anthropic's Claude Helped Mozilla Improve Firefox's Security

"It took Anthropic's most advanced artificial-intelligence model about 20 minutes to find its first Firefox browser bug during an internal test of its hacking prowess," reports the Wall Street Journal. The Anthropic team submitted it, and Firefox's developers quickly wrote back: This bug was serious. Could they get on a call? "What else do you have? Send us more," said Brian Grinstead, an engineer with Mozilla, Firefox's parent organization. Anthropic did. Over a two-week period in January, Claude Opus 4.6 found more high-severity bugs in Firefox than the rest of the world typically reports in two months, Mozilla said... In the two weeks it was scanning, Claude discovered more than 100 bugs in total, 14 of which were considered "high severity..." Last year, Firefox patched 73 bugs that it rated as either high severity or critical. A Mozilla blog post calls Firefox "one of the most scrutinized and security-hardened codebases on the web. Open source means our code is visible, reviewable, and continuously stress-tested by a global community." So they're impressed — and also thankful Anthropic provided test cases "that allowed our security team to quickly verify and reproduce each issue." Within hours, our platform engineers began landing fixes, and we kicked off a tight collaboration with Anthropic to apply the same technique across the rest of the browser codebase... A number of the lower-severity findings were assertion failures, which overlapped with issues traditionally found through fuzzing, an automated testing technique that feeds software huge numbers of unexpected inputs to trigger crashes and bugs. However, the model also identified distinct classes of logic errors that fuzzers had not previously uncovered... We view this as clear evidence that large-scale, AI-assisted analysis is a powerful new addition in security engineers' toolbox. Firefox has undergone some of the most extensive fuzzing, static analysis, and regular security review over decades.
Despite this, the model was able to reveal many previously unknown bugs. This is analogous to the early days of fuzzing; there is likely a substantial backlog of now-discoverable bugs across widely deployed software. "In the time it took us to validate and submit this first vulnerability to Firefox, Claude had already discovered fifty more unique crashing inputs" in 6,000 C++ files, Anthropic says in a blog post (which points out they've also used Claude Opus 4.6 to discover vulnerabilities in the Linux kernel). Anthropic "also rolled out Claude Code Security, an automated code security testing tool, last month," reports Axios, noting the move briefly rattled cybersecurity stocks...



Workers Who Love 'Synergizing Paradigms' Might Be Bad at Their Jobs

Cornell University makes an announcement. "Employees who are impressed by vague corporate-speak like 'synergistic leadership,' or 'growth-hacking paradigms' may struggle with practical decision-making, a new Cornell study reveals." Published in the journal Personality and Individual Differences, research by cognitive psychologist Shane Littrell introduces the Corporate Bullshit Receptivity Scale (CBSR), a tool designed to measure susceptibility to impressive-but-empty organizational rhetoric... Corporate BS seems to be ubiquitous - but Littrell wondered if it is actually harmful. To test this, he created a "corporate bullshit generator" that churns out meaningless but impressive-sounding sentences like, "We will actualize a renewed level of cradle-to-grave credentialing" and "By getting our friends in the tent with our best practices, we will pressure-test a renewed level of adaptive coherence." He then asked more than 1,000 office workers to rate the "business savvy" of these computer-generated BS statements alongside real quotes from Fortune 500 leaders... The results revealed a troubling paradox. Workers who were more susceptible to corporate BS rated their supervisors as more charismatic and "visionary," but also displayed lower scores on a portion of the study that tested analytic thinking, cognitive reflection and fluid intelligence. Those more receptive to corporate BS also scored significantly worse on a test of effective workplace decision-making. The study found that being more receptive to corporate bullshit was also positively linked to job satisfaction and feeling inspired by company mission statements. Moreover, those who were more likely to fall for corporate BS were also more likely to spread it. Essentially, the employees most excited and inspired by "visionary" corporate jargon may be the least equipped to make effective, practical business decisions for their companies.

